Multi-person collaborative creation system of building information modeling drawings based on blockchain
SHEN Yumin, WANG Jinlong, HU Diankai, LIU Xingyu
Journal of Computer Applications    2021, 41 (8): 2338-2345.   DOI: 10.11772/j.issn.1001-9081.2020101549
Multi-person collaborative creation of Building Information Modeling (BIM) drawings is very important in large building projects. However, the existing methods for multi-person collaborative creation of BIM drawings, based on Revit and other modeling software or on cloud services, suffer from confused BIM drawing versions, difficult traceability, data security risks and other problems. To solve these problems, a blockchain-based multi-person collaborative creation system for BIM drawings was designed. Using an on-chain and off-chain collaborative storage method, the blockchain stored the BIM drawing information produced after each creation step, while a database stored the complete BIM drawings. The decentralization, traceability and tamper-resistance of the blockchain were used to keep the version history of the BIM drawings clear, provide a basis for future copyright division, and enhance the data security of BIM drawing information. Experimental results show that the average block generation time of the proposed system under multi-user concurrency is 0.467 85 s and the maximum processing rate is 1 568 transactions per second, which proves the reliability of the system and that it can meet the needs of actual application scenarios.
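The on-chain/off-chain storage split described above can be sketched in a few lines of Python. This is a minimal illustration with hypothetical names (`ToyChain`, `commit_revision`), not the paper's implementation: the chain keeps only each revision's author and SHA-256 digest, while the full drawing bytes live in an ordinary off-chain store.

```python
import hashlib
import json

class ToyChain:
    """Append-only hash chain holding per-revision metadata (illustration only)."""

    def __init__(self):
        self.blocks = []

    def add(self, payload):
        prev = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        block = {"prev": prev, "payload": payload}
        block["hash"] = hashlib.sha256(
            json.dumps({"prev": prev, "payload": payload}, sort_keys=True).encode()
        ).hexdigest()
        self.blocks.append(block)

    def verify(self):
        prev = "0" * 64
        for b in self.blocks:
            digest = hashlib.sha256(
                json.dumps({"prev": b["prev"], "payload": b["payload"]}, sort_keys=True).encode()
            ).hexdigest()
            if b["prev"] != prev or b["hash"] != digest:
                return False
            prev = b["hash"]
        return True

off_chain_db = {}   # digest -> full drawing bytes (the "database" side)
chain = ToyChain()  # fingerprints and authorship (the "blockchain" side)

def commit_revision(author, drawing_bytes):
    digest = hashlib.sha256(drawing_bytes).hexdigest()
    off_chain_db[digest] = drawing_bytes              # complete drawing stays off-chain
    chain.add({"author": author, "digest": digest})   # only the fingerprint goes on-chain

commit_revision("alice", b"floor plan v1")
commit_revision("bob", b"floor plan v2")
```

Tampering with any stored revision changes its digest and breaks `verify()`, which is the tamper-evidence and traceability the abstract relies on; the ordered author/digest records also give a basis for later copyright division.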
Content distribution acceleration strategy in mobile edge computing
LIU Xing, YANG Zhen, WANG Xinjun, ZHU Heng
Journal of Computer Applications    2020, 40 (5): 1389-1391.   DOI: 10.11772/j.issn.1001-9081.2019091679

Focusing on the content distribution acceleration problem in Mobile Edge Computing (MEC), considering the influence of the MEC server's limited storage space on content caching and taking the object-fetching delay of mobile users as the optimization goal, an Interest-based Content Distribution Acceleration Strategy (ICDAS) was proposed. Taking into account the MEC server storage space, the interest of mobile user groups in different objects, and the objects' file sizes, objects were selectively cached on MEC servers, and the cached objects were updated in a timely manner to satisfy as many of the mobile user groups' content requirements as possible. The experimental results show that the proposed strategy has good convergence, and its cache hit ratio is relatively stable and significantly better than that of existing strategies. Once the system runs stably, the strategy reduces users' object data fetching delay by 20% compared with existing strategies.
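The selective-caching decision at the heart of such a strategy — which objects to keep given limited MEC storage, group interest and file size — can be sketched as a greedy interest-per-byte ranking. This is a simplification under assumed inputs, not ICDAS itself, which also refreshes the cache as interests drift:

```python
def select_cache(objects, capacity):
    """objects: (name, group_interest, size) triples; greedily cache the
    highest interest-per-byte objects that still fit in `capacity`."""
    chosen, used = [], 0
    for name, interest, size in sorted(objects, key=lambda o: o[1] / o[2], reverse=True):
        if used + size <= capacity:
            chosen.append(name)
            used += size
    return chosen

catalog = [("a", 90, 30), ("b", 60, 40), ("c", 30, 40)]
```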

Local feature point matching algorithm with anti-affine property
QIU Yunfei, LIU Xing
Journal of Computer Applications    2020, 40 (4): 1133-1137.   DOI: 10.11772/j.issn.1001-9081.2019091588
In order to solve the problems that existing local feature matching algorithms have poor matching performance and high time cost on affine images, and that the RANdom SAmple Consensus (RANSAC) algorithm cannot obtain a good parameter model in affine image matching, an Affine Accelerated-KAZE (A-AKAZE) algorithm with an anti-affine property was proposed, and vector field consistency was used to screen inliers. Firstly, the scale space was constructed by using a nonlinear function, feature points were detected by the Hessian matrix, and appropriate areas centered on the feature points were selected as feature sampling windows. Secondly, the feature sampling windows were projected along longitude and latitude to simulate the influence of different viewing angles on the image, and Affine Modified-Local Difference Binary (A-MLDB) descriptors with an anti-affine property were extracted from the projection regions. Finally, inliers were extracted by the vector field consistency algorithm. Experimental results show that the correct matching rate of the A-AKAZE algorithm is more than 20% higher than that of AKAZE, about 15% higher than that of AKAZE+RANSAC, about 10% higher than that of Affine Scale-Invariant Feature Transform (ASIFT), and 5% higher than that of ASIFT+RANSAC; at the same time, the matching speed of A-AKAZE is much higher than that of AKAZE+RANSAC, ASIFT and ASIFT+RANSAC.
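The inlier-screening idea — keep only matches whose displacement agrees with the dominant motion of the vector field — can be sketched on toy 2-D match vectors. This is a crude stand-in for the paper's vector field consistency algorithm; the angular tolerance is an assumed parameter:

```python
import math

def screen_inliers(displacements, angle_tol=0.5):
    """displacements: (dx, dy) vectors from matched keypoint pairs.
    Keep those whose direction lies within `angle_tol` radians of the
    circular-mean direction of the whole field."""
    angles = [math.atan2(dy, dx) for dx, dy in displacements]
    mean = math.atan2(sum(math.sin(a) for a in angles),
                      sum(math.cos(a) for a in angles))

    def ang_diff(a):
        # wrap the difference into (-pi, pi] before taking its magnitude
        return abs(math.atan2(math.sin(a - mean), math.cos(a - mean)))

    return [d for d, a in zip(displacements, angles) if ang_diff(a) <= angle_tol]
```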
Wireless sensor deployment optimization based on improved IHACA-CpSPIEL algorithm
DUAN Yujun, WANG Yaoli, CHANG Qing, LIU Xing
Journal of Computer Applications    2020, 40 (3): 793-798.   DOI: 10.11772/j.issn.1001-9081.2019071201
Aiming at the problems of low coverage and high communication cost in wireless sensor deployment, a sensor deployment method combining an Improved Heuristic Ant Colony Algorithm (IHACA) with a Chaos-optimized padded Sensor Placements at Informative and cost-Effective Locations algorithm (IHACA-CpSPIEL) was proposed. Firstly, the correlation between observed and unobserved points was established by mutual information, and the communication cost was described in graph-theoretic form to establish a mathematical model with submodularity. Secondly, a chaos operator was introduced to improve the global search ability of the pSPIEL (padded Sensor Placements at Informative and cost-Effective Locations) algorithm for local parameters, and the optimal number of clusters was found. Then, the factors of the ant colony's distance heuristic function and the pheromone updating mechanism were changed to jump out of locally optimal communication costs. Finally, the chaos-optimized pSPIEL algorithm (CpSPIEL) was integrated with IHACA to determine the shortest path and thus achieve low-cost deployment. The experimental results show that the proposed algorithm can jump out of local optima well, reduces the communication cost by 6.5% to 24.0% compared with the pSPIEL algorithm, and has a faster search speed.
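The submodularity noted above is what makes greedy placement effective; the classic greedy step that pSPIEL builds on and refines looks like this (toy coverage sets, not the paper's mutual-information objective):

```python
def greedy_placement(coverage, k):
    """coverage: site -> set of points a sensor at that site observes.
    Repeatedly pick the site adding the most newly covered points; for a
    submodular gain this greedy carries a (1 - 1/e) approximation guarantee."""
    chosen, covered = [], set()
    for _ in range(k):
        best = max((s for s in coverage if s not in chosen),
                   key=lambda s: len(coverage[s] - covered))
        chosen.append(best)
        covered |= coverage[best]
    return chosen, covered

sites = {"A": {1, 2, 3}, "B": {3, 4}, "C": {4, 5, 6, 7}}
```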
Next location recommendation based on spatiotemporal-aware GRU and attention
LI Quan, XU Xinhua, LIU Xinghong, CHEN Qi
Journal of Computer Applications    2020, 40 (3): 677-682.   DOI: 10.11772/j.issn.1001-9081.2019071289
Aiming at the problem that the Gated Recurrent Unit (GRU) of a recurrent neural network does not consider the spatial and temporal information of locations when making location recommendations, a spatiotemporal-aware GRU model was proposed. In addition, to address the noise produced by unrelated check-in data in a check-in sequence, a next-location recommendation method using SpatioTemporal-aware GRU and Attention (ST-GRU+Attention) was proposed. Firstly, a time gate and a distance gate were added to the GRU model by computing the time interval and distance gap between two successive locations; weight matrices were set to control the influence of temporal and spatial information on recommending the next location. Secondly, an attention mechanism was introduced: the user's attention weight coefficients were obtained by calculating attention scores over the user's preferences, yielding the user's personalized preference. Finally, the objective function was constructed and the model parameters were learned with the Bayesian Personalized Ranking (BPR) algorithm. The experimental results show that the accuracy of ST-GRU+Attention is significantly better than that of the Factorizing Personalized Markov Chain and Localized Region (FPMC-LR), Personalized Ranking Metric Embedding (PRME) and Spatial Temporal Recurrent Neural Network (ST-RNN) recommendation methods: its precision and recall are 15.4% and 17.1% higher respectively than those of ST-RNN, the best of the three. The ST-GRU+Attention method can effectively improve the effect of next-location recommendation.
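How a time gate and a distance gate can damp a check-in's influence as the gaps grow is easiest to see in a scalar toy update. All weights below are made-up scalars; the paper's model uses learned weight matrices inside a full GRU:

```python
import math

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def st_gru_step(h, x, dt, dd, w):
    """One GRU-like update: the time gate (dt = time interval) and the distance
    gate (dd = distance gap) shrink the input's contribution before the usual
    update-gate blend of old state and candidate."""
    t_gate = sigmoid(w["wt"] * dt + w["bt"])
    d_gate = sigmoid(w["wd"] * dd + w["bd"])
    z = sigmoid(w["wz"] * x + w["uz"] * h)                         # update gate
    cand = math.tanh(w["wh"] * (x * t_gate * d_gate) + w["uh"] * h)
    return (1 - z) * h + z * cand

# negative wt/wd make distant (in time or space) check-ins count for less
w = dict(wt=-1.0, bt=0.0, wd=-1.0, bd=0.0, wz=1.0, uz=0.0, wh=1.0, uh=0.5)
```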
Node recognition for different types of sugarcanes based on machine vision
SHI Changyou, WANG Meili, LIU Xinran, HUANG Huili, ZHOU Deqiang, DENG Ganran
Journal of Computer Applications    2019, 39 (4): 1208-1213.   DOI: 10.11772/j.issn.1001-9081.2018092016
Sugarcane nodes are difficult to recognize because different types of sugarcane have diverse and complex surfaces. To solve this problem, a sugarcane node recognition method suitable for different types of sugarcane was proposed based on machine vision. Firstly, using an iterative linear fitting algorithm, the target region was extracted from the original image and its slope angle to the horizontal axis was estimated; according to this angle, the target was rotated until it was nearly parallel to the horizontal axis. Secondly, Double-Density Dual-Tree Complex Wavelet Transform (DD-DTCWT) was used to decompose the image, which was then reconstructed using the wavelet coefficients perpendicular or approximately perpendicular to the horizontal axis. Finally, a line detection algorithm was applied to obtain the lines near the sugarcane nodes, and recognition was realized by further verifying the density, length and mutual distances of the edge lines. Experimental results show that the complete recognition rate reaches 92%, the localization error of about 80% of the nodes is less than 16 pixels, and that of 95% of the nodes is less than 32 pixels. The proposed method realizes node recognition for different types of sugarcane under different backgrounds with high localization accuracy.
Authentication scheme for smart grid communication based on elliptic curve cryptography
LIU Xindong, XU Shuishuai, CHEN Jianhua
Journal of Computer Applications    2019, 39 (3): 779-783.   DOI: 10.11772/j.issn.1001-9081.2018071486
To ensure the security and reliability of communication in the smart grid, more and more authentication protocols have been applied in the communication process. Several defects were pointed out in the authentication protocol proposed by Mahmood et al. (MAHMOOD K, CHAUDHRY S A, NAQVI H, et al. An elliptic curve cryptography based lightweight authentication scheme for smart grid communication. Future Generation Computer Systems, 2018, 81: 557-565): the protocol can easily be attacked by internal privileged personnel, lacks a password replacement phase, is unfriendly to users, cannot guarantee a unique username, and even contains a formula error. To improve this protocol, an authentication protocol based on elliptic curves was proposed. Firstly, a login phase between the user and the device was added to the improved protocol. Secondly, an elliptic curve cryptography puzzle was used to realize information exchange. Finally, a password replacement phase was added. Formal analysis with BAN (Burrows-Abadi-Needham) logic shows that the improved protocol is safe and feasible: it resists internal personnel attacks, supports password replacement, guarantees a unique username, and is friendlier to users.
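The elliptic-curve primitive such protocols build on can be illustrated with a toy curve over GF(97). The curve parameters and tiny private scalars are purely for illustration (real schemes use standardized curves such as P-256), and this shows only the Diffie-Hellman-style key agreement that underlies such schemes, not the paper's full protocol:

```python
# Toy short Weierstrass curve y^2 = x^3 + 2x + 3 over GF(97). Illustration only.
P, A = 97, 2

def inv(v):
    return pow(v, P - 2, P)   # Fermat inverse, valid since P is prime

def add(p1, p2):
    """Standard affine point addition (None is the point at infinity)."""
    if p1 is None:
        return p2
    if p2 is None:
        return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return None
    if p1 == p2:
        lam = (3 * x1 * x1 + A) * inv(2 * y1) % P
    else:
        lam = (y2 - y1) * inv(x2 - x1) % P
    x3 = (lam * lam - x1 - x2) % P
    return (x3, (lam * (x1 - x3) - y1) % P)

def mul(k, pt):
    """Double-and-add scalar multiplication."""
    acc = None
    while k:
        if k & 1:
            acc = add(acc, pt)
        pt = add(pt, pt)
        k >>= 1
    return acc

G = (0, 10)                  # on the curve: 10^2 = 100 = 3 = 0^3 + 2*0 + 3 (mod 97)
alice_priv, bob_priv = 2, 3  # unrealistically small private scalars, demo only
alice_pub, bob_pub = mul(alice_priv, G), mul(bob_priv, G)
shared_a = mul(alice_priv, bob_pub)   # both sides derive the same point
shared_b = mul(bob_priv, alice_pub)
```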
Semantic matching model of knowledge graph in question answering system based on transfer learning
LU Qiang, LIU Xingyu
Journal of Computer Applications    2018, 38 (7): 1846-1852.   DOI: 10.11772/j.issn.1001-9081.2018010186
To solve the problem that semantic matching between questions and relations in a single-fact question answering system struggles to achieve high accuracy with small-scale labeled samples, a transfer learning model based on Recurrent Neural Network (RNN) was proposed. Firstly, by means of sequence reconstruction, an RNN-based sequence-to-sequence unsupervised learning algorithm was used to learn the semantic distribution (word vectors and RNN weights) of questions from a large number of unlabeled samples. Then, by assigning values to the parameters of a neural network, the semantic distribution was used as the parameters of the supervised semantic matching algorithm. Finally, using the inner product of question features and relation features, the semantic matching model was trained on the labeled samples. The experimental results show that, compared with the supervised learning methods Embed-AVG and RNNrandom, the accuracy of semantic matching of the proposed model increases by an average of 5.6 and 8.8 percentage points respectively in an environment with few labeled samples and many unlabeled samples. The proposed model can significantly improve the accuracy of semantic matching on labeled samples by pre-learning the semantic distribution of a large number of unlabeled samples.
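The transfer step — reuse unsupervised embeddings as the starting parameters of a supervised inner-product matcher — can be miniaturized as follows. The toy vectors and the plain logistic update are assumptions for illustration; the paper pre-trains with a sequence-to-sequence RNN rather than fixed vectors:

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def fine_tune(pretrained, labelled, lr=0.5, epochs=200):
    """Initialise question/relation vectors from the unsupervised embeddings,
    then fine-tune them on labelled (question, relation, match) triples with a
    logistic loss on the inner product."""
    emb = {k: list(v) for k, v in pretrained.items()}   # transfer: copy, then adapt
    for _ in range(epochs):
        for q, r, y in labelled:
            s = 1.0 / (1.0 + math.exp(-dot(emb[q], emb[r])))
            g = lr * (y - s)
            for i in range(len(emb[q])):
                dq, dr = g * emb[r][i], g * emb[q][i]
                emb[q][i] += dq
                emb[r][i] += dr
    return emb

pre = {"q": [0.1, 0.2], "rel_good": [0.2, 0.1], "rel_bad": [-0.1, -0.2]}
emb = fine_tune(pre, [("q", "rel_good", 1), ("q", "rel_bad", 0)])
```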
Personalized test question recommendation method based on unified probabilistic matrix factorization
LI Quan, LIU Xinghong, XU Xinhua, LIN Song
Journal of Computer Applications    2018, 38 (3): 639-643.   DOI: 10.11772/j.issn.1001-9081.2017082071
In recent years, test question resources in online education have grown at an explosive rate, and it is difficult for students to find appropriate questions among them. Many test question recommendation methods have been proposed to solve this problem. However, traditional test question recommendation methods based on unified probabilistic matrix factorization have many problems; in particular, information about students' knowledge points is not considered, resulting in low accuracy of the recommendation results. Therefore, a personalized test question recommendation method based on unified probabilistic matrix factorization was proposed. Firstly, each student's mastery of knowledge points was obtained through a cognitive diagnosis model. Secondly, unified probabilistic matrix factorization was performed by combining the information of students, test questions and knowledge points. Finally, test questions were recommended according to the desired difficulty range. The experimental results show that, compared with other traditional recommendation methods, the proposed method obtains the best recommendation results in terms of accuracy for different difficulty ranges, and it has good application prospects.
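The factorization step can be sketched with plain SGD on observed (student, question, score) triples — a generic probabilistic-matrix-factorization-style update, without the knowledge-point coupling that distinguishes the paper's unified model:

```python
import math
import random

def factorize(ratings, n_users, n_items, k=2, lr=0.1, reg=0.02, epochs=500, seed=0):
    """SGD on squared error with L2 regularisation over latent factors U, V."""
    rng = random.Random(seed)
    U = [[rng.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n_users)]
    V = [[rng.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n_items)]
    for _ in range(epochs):
        for u, i, r in ratings:
            err = r - sum(U[u][f] * V[i][f] for f in range(k))
            for f in range(k):
                uf, vf = U[u][f], V[i][f]
                U[u][f] += lr * (err * vf - reg * uf)
                V[i][f] += lr * (err * uf - reg * vf)
    return U, V

def rmse(U, V, ratings):
    k = len(U[0])
    sq = sum((r - sum(U[u][f] * V[i][f] for f in range(k))) ** 2 for u, i, r in ratings)
    return math.sqrt(sq / len(ratings))

data = [(0, 0, 1.0), (0, 1, 0.0), (1, 0, 0.0), (1, 1, 1.0)]
U, V = factorize(data, n_users=2, n_items=2)
fit_error = rmse(U, V, data)
```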
HIC-MedRank: improved drug recommendation algorithm based on heterogeneous information network
ZOU Linlin, LI Xueming, LI Xue, YUAN Hong, LIU Xing
Journal of Computer Applications    2017, 37 (8): 2368-2373.   DOI: 10.11772/j.issn.1001-9081.2017.08.2368
With the rapid growth of the medical literature, it is difficult for physicians to keep their knowledge up to date by reading biomedical literature. The MedRank algorithm can recommend influential medications from the literature by analyzing an information network, based on the assumption that "a good treatment is likely to be found in a good medical article published in a good journal, written by good author(s)", recommending the most effective drugs for patients with all types of disease. But the algorithm still has several problems: 1) the diseases given as inputs are not independent; 2) the outputs are not specific drugs; 3) other factors, such as an article's publication time, are not considered; 4) there is no definition of "good" for articles, journals and authors. An improved algorithm named HIC-MedRank was proposed, which introduces the H-index of authors, the impact factor of journals and the citation count of articles as criteria for defining good authors, journals and articles, and recommends antihypertensive agents for patients suffering from hypertension with Chronic Kidney Disease (CKD) by additionally considering the publication time, supporting institutions, publication type and other factors of articles. The experimental results on Medline datasets show that the drug recommendations of the HIC-MedRank algorithm are more precise than those of MedRank and are better recognized by attending physicians; the consistency rate with the JNC guidelines is up to 80%.
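The quality-weighting idea can be sketched as a scoring function over article metadata. The weights and the recency discount below are invented for illustration, not the ones HIC-MedRank uses:

```python
def rank_drugs(articles, current_year):
    """Each article contributes to its drugs a score built from author h-index,
    journal impact factor and citation count, discounted by article age."""
    scores = {}
    for art in articles:
        quality = (0.4 * art["h_index"] + 0.4 * art["impact_factor"]
                   + 0.2 * art["citations"]) / (1 + current_year - art["year"])
        for drug in art["drugs"]:
            scores[drug] = scores.get(drug, 0.0) + quality
    return sorted(scores, key=scores.get, reverse=True)

papers = [
    {"h_index": 30, "impact_factor": 10, "citations": 50, "year": 2015, "drugs": ["drug_a"]},
    {"h_index": 5, "impact_factor": 2, "citations": 3, "year": 2015, "drugs": ["drug_b"]},
]
```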
Product property sentiment analysis based on neural network model
LIU Xinxing, JI Donghong, REN Yafeng
Journal of Computer Applications    2017, 37 (6): 1735-1740.   DOI: 10.11772/j.issn.1001-9081.2017.06.1735
Concerning the poor results of product property sentiment analysis obtained by simple word-vector-based neural network models, a gated recursive neural network model integrating discrete features and word vector embeddings was proposed. Firstly, sentences were modeled with a directed recurrent graph and the gated recursive neural network model was adopted to perform product property sentiment analysis. Then, discrete features and word vector embeddings were integrated into the gated recursive neural network. Finally, feature extraction and sentiment analysis were completed under three different task models: a pipeline model, a joint model and a collapsed model. Experiments were conducted on the laptop and restaurant review datasets of SemEval-2014, with the macro F1 score as the evaluation indicator. The gated recursive neural network model achieved F1 scores of 48.21% and 62.19%, nearly 1.5 percentage points higher than an ordinary recursive neural network model, indicating that the gated recursive neural network can capture complicated features and enhance performance on product property sentiment analysis. The proposed model integrating discrete features and word vector embeddings achieved F1 scores of 49.26% and 63.31%, 0.5 to 1.0 percentage points higher than the baseline methods. This shows that discrete features and word vector embeddings complement each other, and also that neural network models based only on word embeddings have room for improvement. Among the three task models, the pipeline model achieved the highest F1 scores, so it is better to complete feature extraction and sentiment analysis separately.
Fast learning algorithm of grammatical probabilities in multi-function radars based on Earley algorithm
CAO Shuai, WANG Buhong, LIU Xinbo, SHEN Haiou
Journal of Computer Applications    2016, 36 (9): 2636-2641.   DOI: 10.11772/j.issn.1001-9081.2016.09.2636
To deal with the probability learning problem in Multi-Function Radar (MFR) based on Stochastic Context-Free Grammar (SCFG) model, a new fast learning algorithm of grammatical probabilities in MFR based on Earley algorithm was presented on the basis of traditional Inside-Outside (IO) algorithm and Viterbi-Score (VS) algorithm. The intercepted radar data was pre-processed to construct an Earley parsing chart which can describe the derivation process. Furthermore, the best parsing tree was extracted from the parsing chart based on the criterion of maximum sub-tree probabilities. The modified IO algorithm and modified VS algorithm were utilized to realize the learning of grammatical probabilities and MFR parameter estimation. After getting the grammatical parameters, the state of MFR was estimated by Viterbi algorithm. Theoretical analysis and simulation results show that compared to the conventional IO algorithm and VS algorithm, the modified algorithm can effectively reduce the computation complexity and running time while keeping the same level of estimation accuracy, which validates that the grammatical probability learning speed can be improved with the proposed method.
Beamforming based localization algorithm in 60GHz wireless local area networks
LIU Xing, ZHANG Hao, XU Lingwei
Journal of Computer Applications    2016, 36 (8): 2170-2174.   DOI: 10.11772/j.issn.1001-9081.2016.08.2170
Concerning the difficulty of ranging with 60 GHz signals in Non-Line-Of-Sight (NLOS) conditions, a new positioning algorithm based on beamforming in Wireless Local Area Networks (WLAN) was proposed. Firstly, beamforming was applied to search for the strongest path by steering the receiving antennas along the channel path with the maximum power, which enhanced the robustness of the search and expanded the location coverage. Secondly, the time delay bias in NLOS conditions was modeled as a Gaussian random variable to reconstruct the NLOS measurements. Finally, to further improve the positioning accuracy, an outlier detection mechanism with a reasonable detection threshold was introduced. Localization simulations were conducted in Matlab using the STAs-STAs (STAtions-STAtions) channel model. Under NLOS conditions, the Time of Arrival (TOA) localization algorithm based on the traditional coherent estimation method achieved an average positioning error of about 2 m and a probability of 1 m localization accuracy of just 0.5%, while the proposed algorithm achieved an average positioning error of 1.02 cm and a probability of 1 m localization accuracy of 94%. The simulation results show that beamforming is an effective solution for 60 GHz localization in NLOS conditions, effectively improving the localization accuracy and the probability of successful localization.
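After the beam search has isolated the strongest paths and the Gaussian NLOS bias has been subtracted from each range, the position fix itself is a standard least-squares TOA step; with three anchors it reduces to a 2x2 linear solve. This is textbook trilateration, not the paper's full pipeline:

```python
def trilaterate(anchors, dists):
    """anchors: three (x, y) positions; dists: bias-corrected ranges to each.
    Linearise the circle equations by subtracting the first one, then solve
    the resulting 2x2 system by Cramer's rule."""
    (x0, y0), (x1, y1), (x2, y2) = anchors
    a1, b1 = 2 * (x1 - x0), 2 * (y1 - y0)
    c1 = x1 ** 2 - x0 ** 2 + y1 ** 2 - y0 ** 2 - dists[1] ** 2 + dists[0] ** 2
    a2, b2 = 2 * (x2 - x0), 2 * (y2 - y0)
    c2 = x2 ** 2 - x0 ** 2 + y2 ** 2 - y0 ** 2 - dists[2] ** 2 + dists[0] ** 2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Station at (1, 1); ranges to anchors (0,0), (4,0), (0,3) are sqrt(2), sqrt(10), sqrt(5).
x, y = trilaterate([(0, 0), (4, 0), (0, 3)], [2 ** 0.5, 10 ** 0.5, 5 ** 0.5])
```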
Research of domain ontology driven enterprise on-line analytical processing systems
LIU Xinrui, REN Fengyu, LEI Guoping
Journal of Computer Applications    2016, 36 (1): 254-259.   DOI: 10.11772/j.issn.1001-9081.2016.01.0254
At present, insufficient participation of formalized business knowledge in On-Line Analytical Processing (OLAP) restricts and limits in-depth analysis. To overcome these limitations, a new approach for building an OLAP system based on domain ontology was proposed. Firstly, after analyzing the limitations of existing ontology construction methods, a semi-automatic domain ontology construction method was put forward using a similarity evaluation algorithm based on multiple features and weighted patterns of entity classes: local ontologies were generated from databases, and a global top-level domain ontology was then designed by experts, implementing a formalized description of mine-production domain knowledge. Then, the key indicators of mine production capacity were chosen as measurements and a Multi-Dimensional Ontology (MDO) with business semantic concepts was built. Finally, the method was tested in a practical metal mine decision-making system project. The experimental results show that the proposed method can dynamically integrate the heterogeneous information resources of the mine production process, facilitate unambiguous interpretation of query results, and discover association rules and implicit knowledge through the formal expression and reasoning advantages of domain ontology. Meanwhile, through high-frequency and general concept views, it avoids duplicated queries and improves the performance of traditional OLAP systems.
Lossless digital image coding method based on multi-directional hybrid differential
GAO Jian, YANG Ke, LIU Xingxing
Journal of Computer Applications    2015, 35 (9): 2648-2651.   DOI: 10.11772/j.issn.1001-9081.2015.09.2648
For lossless digital image coding, a multi-directional hybrid differential method was proposed on the basis of an analysis of two-directional hybrid differencing and the 3-parameter variable-length coding method. Firstly, the multi-directional hybrid differential method analyzed the local feature of the current pixel from four nearby pixels. Then, according to the analysis result, an optimal differential direction for the current pixel was chosen from several primary differential directions; the primary directions contained four differential directions. Unlike two-directional hybrid differencing, multi-directional hybrid differencing does not need to store direction flags. Compared with the results of two-directional hybrid differencing, the entropy of an image processed by multi-directional hybrid differencing was reduced by 8.2% and the bits per pixel were reduced by 11%. The experimental results show that the algorithm improves coding efficiency by reducing the entropy of the digital image to a lower level.
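The flag-free direction choice is what makes such a scheme decodable: as long as the predictor looks only at already-decoded neighbours, the decoder can repeat the same choice. The sketch below substitutes a simple "neighbour closest to the context median" rule for the paper's local-feature analysis, and round-trips exactly:

```python
def predict(ctx):
    """Deterministic predictor over the causal context: the neighbour value
    closest to the context median, so no direction flag is stored."""
    med = sorted(ctx)[len(ctx) // 2]
    return min(ctx, key=lambda v: abs(v - med))

def context(img, y, x):
    """Causal neighbours: left, up, up-left, up-right (all already decoded)."""
    ctx = []
    if x > 0:
        ctx.append(img[y][x - 1])
    if y > 0:
        ctx.append(img[y - 1][x])
        if x > 0:
            ctx.append(img[y - 1][x - 1])
        if x + 1 < len(img[0]):
            ctx.append(img[y - 1][x + 1])
    return ctx or [0]

def encode(img):
    h, w = len(img), len(img[0])
    return [[img[y][x] - predict(context(img, y, x)) for x in range(w)] for y in range(h)]

def decode(res):
    h, w = len(res), len(res[0])
    img = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # the decoder rebuilds the same context, hence the same prediction
            img[y][x] = res[y][x] + predict(context(img, y, x))
    return img
```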
Malware behavior assessment system based on support vector machine
OUYANG Boyu, LIU Xin, XU Chan, WU Jian, AN Xiao
Journal of Computer Applications    2015, 35 (4): 972-976.   DOI: 10.11772/j.issn.1001-9081.2015.04.0972

Aiming at the low classification accuracy of malware behavior analysis systems, a malware classification method based on Support Vector Machine (SVM) was proposed. First, a risk behavior library using software behavior results as characteristics was established manually. Then all software behaviors were captured and matched against the risk behavior library, and the matching results were converted, through a conversion algorithm, into data suitable for SVM training. For selecting the SVM model, kernel function and parameters (C, g), a method combining grid search and a Genetic Algorithm (GA) was used after theoretical analysis. A malware behavior assessment system based on the SVM classification model was designed to verify the effectiveness of the proposed classification method. The experiments show that the false positive rate and false negative rate of the system are 5.52% and 3.04% respectively, outperforming K-Nearest Neighbor (KNN) and Naive Bayes (NB); its performance is on a par with the BP neural network while being more efficient in training and classification.
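Two of these steps are easy to sketch: converting matched behaviours into an SVM-ready vector against a risk behavior library (the library entries here are made up), and the coarse (C, g) grid search — with a stand-in score function in place of real cross-validated SVM accuracy:

```python
import itertools

RISK_LIBRARY = ["create_autorun", "modify_registry",
                "inject_remote_process", "open_raw_socket"]   # hypothetical entries

def to_feature_vector(observed_behaviours):
    """Match captured behaviours against the risk library; the 0/1 vector is
    the training representation fed to the classifier."""
    seen = set(observed_behaviours)
    return [1 if b in seen else 0 for b in RISK_LIBRARY]

def grid_search(evaluate, Cs, gammas):
    """Return the (C, g) pair with the best validation score."""
    return max(itertools.product(Cs, gammas), key=lambda p: evaluate(*p))

# stand-in score function peaking at C=1.0, g=0.1, for demonstration
best = grid_search(lambda C, g: -((C - 1.0) ** 2 + (g - 0.1) ** 2),
                   Cs=[0.1, 1.0, 10.0], gammas=[0.01, 0.1, 1.0])
```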

Transmission length design of IP network with wavelength-selectable reconfigurable optical add/drop multiplexer
XIONG Ying, MAO Xuesong, LIU Xing, WANG Yaling, JIN Gang
Journal of Computer Applications    2015, 35 (1): 27-30.   DOI: 10.11772/j.issn.1001-9081.2015.01.0027

To deal with the low efficiency and high maintenance cost incurred when multi-point breakdowns or changes occur in a high-speed, large-capacity Wavelength Division Multiplexing (WDM) network, the Reconfigurable Optical Add/Drop Multiplexer (ROADM) component was used to construct a flexible network. Firstly, a 5-node network configuration model was provided. Then, the relation between loss and transmission length in an optical network composed of ROADMs was investigated under dynamic conditions, and a design flow for the network transmission length was proposed. Next, a 5-node bi-directional fiber ring experimental network was constructed and its optical loss characteristics were measured. Finally, analysis of the experimental data shows that the computed and measured optical loss values are approximately equal (0.8 dB difference), verifying the feasibility of the design and assuring reliable transmission between nodes.
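The length-design calculation reduces to an optical power budget. The sketch below uses generic textbook figures (0.2 dB/km fibre attenuation, 6 dB ROADM insertion loss) rather than the values measured in the paper:

```python
def link_loss_db(fiber_km, n_roadm, atten_db_per_km=0.2, roadm_il_db=6.0):
    """Total span loss: fibre attenuation plus ROADM insertion loss."""
    return fiber_km * atten_db_per_km + n_roadm * roadm_il_db

def max_fiber_km(budget_db, n_roadm, atten_db_per_km=0.2, roadm_il_db=6.0):
    """Longest fibre run that keeps the span within the power budget."""
    return max(0.0, (budget_db - n_roadm * roadm_il_db) / atten_db_per_km)
```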

Graph embedding method integrating multiscale features
LI Zhijie, LI Changhua, YAO Peng, LIU Xin
Journal of Computer Applications    2014, 34 (10): 2891-2894.   DOI: 10.11772/j.issn.1001-9081.2014.10.2891

In the domain of structural pattern recognition, existing graph embedding methods lack versatility and have high computational complexity. A new graph embedding method integrating multiscale features, based on space syntax theory, was proposed to solve this problem. Global, local and detail features were extracted to construct a feature vector depicting the graph as a multiscale histogram. The global features included the vertex number, edge number and intelligibility degree; the local features comprised node topological features, edge domain feature dissimilarity and edge topological feature dissimilarity; the detail features comprised the numerical and symbolic attributes on vertices and edges. In this way, structural pattern recognition was converted into statistical pattern recognition, so that a Support Vector Machine (SVM) could be applied to graph classification. The experimental results show that the proposed graph embedding method achieves higher classification accuracy on different graph datasets. Compared with other graph embedding methods, the proposed method can adequately render the graph's topology and merge non-topological features in terms of the graph's domain properties, and it has favorable universality and low computational complexity.
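The embedding itself — a graph mapped to a fixed-length vector so that an SVM can be applied — can be miniaturized with two global counts plus a degree histogram. The paper's space-syntax features are considerably richer:

```python
def graph_features(adj, max_degree=4):
    """adj: undirected graph as vertex -> neighbour list.
    Vector = [|V|, |E|] + degree histogram (degrees capped at max_degree)."""
    n = len(adj)
    m = sum(len(nbrs) for nbrs in adj.values()) // 2   # each edge counted twice
    hist = [0] * (max_degree + 1)
    for nbrs in adj.values():
        hist[min(len(nbrs), max_degree)] += 1
    return [n, m] + hist

triangle = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b"]}
```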

Fast handover mechanism based on Delaunay triangulation for FMIPv6
LI Zhenjun, LIU Xing
Journal of Computer Applications    2013, 33 (10): 2707-2710.  
To solve the packet loss problem caused by inaccurate prediction of the New Access Router (NAR) in Fast Handovers for Mobile IPv6 (FMIPv6), this paper proposed a triangulation-based fast handover mechanism (TFMIPv6). In TFMIPv6, a triangulation algorithm was used to split the network into a virtual triangle topology, and tunnels were established among adjacent access routers. Candidate target Access Points (AP) were selected to quickly recalculate the new care-of addresses for the mobile nodes, and packets were buffered in two potential NARs during handover. The experimental results illustrate that the TFMIPv6 protocol achieves lower handover latency and packet loss rate than FMIPv6.
Improved algorithm of thematic term extraction based on increment term-set frequency from Chinese document
LIU Xinglin
Journal of Computer Applications    2013, 33 (09): 2546-2549.   DOI: 10.11772/j.issn.1001-9081.2013.09.2546
In order to solve the problem that the thematic term extraction algorithm based on incremental term-set frequency cannot extract compound words, this paper added a text preprocessing step, compound-word recognition, to the original algorithm. Compound-word recognition was based on part-of-speech detection and a word co-occurrence directed graph, and it corrected the results of word segmentation. When generating the thematic term candidate set, the position of each word occurrence was examined to determine its weight; the total weight of each word was then accumulated, and the candidate set of thematic terms was generated in order of weight from high to low. Each time the algorithm took a term from the candidate set, the frequency increment was calculated: if the increment was less than a given threshold, the algorithm stopped; otherwise, the candidate was added to the thematic term set. The experimental results show that this algorithm achieves good results: the thematic terms it acquires reflect the main content of an article more aptly, and satisfaction with the thematic terms is 5% higher than with the original algorithm.
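The increment-based stopping rule can be sketched directly: walk the weight-sorted candidates and stop once the next term's relative contribution to the accumulated weight drops below a threshold (the threshold value here is assumed for illustration):

```python
def extract_thematic_terms(candidates, threshold=0.1):
    """candidates: (term, weight) pairs. Take terms in descending weight and
    stop when the next weight is below `threshold` of the running total."""
    chosen, total = [], 0.0
    for term, weight in sorted(candidates, key=lambda c: -c[1]):
        if total and weight / total < threshold:
            break   # increment too small: remaining terms are not thematic
        chosen.append(term)
        total += weight
    return chosen

cands = [("market", 10.0), ("economy", 8.0), ("the", 0.5)]
```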
Related Articles | Metrics
Feature evaluation of radar signal based on aggregation, discreteness and divisibility
DENG Yanli JIN Weidong LI Jiahui LIU Xin
Journal of Computer Applications    2013, 33 (07): 1946-1949.   DOI: 10.11772/j.issn.1001-9081.2013.07.1946
Abstract740)      PDF (801KB)(504)       Save
The quality of intrapulse features of radar signals has proved to be a significant basis for deciding whether the signals can be differentiated effectively. To evaluate this quality quantitatively, a method adopting fuzziness and close-degree to evaluate the aggregation and discreteness of intrapulse features was proposed in this paper. The spatial distribution of intrapulse features of radar signals was analyzed with this method: aggregation was evaluated by fuzziness, and discreteness by close-degree. For the overlapping states of the feature space distribution, a linear separability measure of intrapulse features was further put forward based on within-class distance, between-class distance and a linear discriminant criterion. The simulation results, based on experiments with two intrapulse features extracted via a time-frequency atom approach from five kinds of radar signals, show that the proposed method and measure are effective and feasible, providing a new idea and approach for quantitatively evaluating features of radar emitter signals.
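A minimal separability measure built from within-class and between-class distances, in the spirit of the abstract, could look like this (the paper's exact criterion may differ):

```python
import math

def mean(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def separability(classes):
    """classes: list of lists of feature vectors, one inner list per signal class.
    Returns the ratio of mean between-class centroid distance to mean
    within-class distance (a Fisher-style measure; illustrative only)."""
    centroids = [mean(c) for c in classes]
    within = [dist(v, m) for c, m in zip(classes, centroids) for v in c]
    between = [dist(mi, mj) for i, mi in enumerate(centroids)
               for mj in centroids[i + 1:]]
    return (sum(between) / len(between)) / (sum(within) / len(within))
```

Well-separated classes yield a ratio far above 1, overlapping classes a ratio near or below 1.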
Reference | Related Articles | Metrics
Feature extraction of energy entropy of ECG signal on meridian systems using wavelet packet analysis
LIU Xin HE Hong TAN Yonghong
Journal of Computer Applications    2013, 33 (04): 1176-1178.   DOI: 10.3724/SP.J.1087.2013.01176
Abstract936)      PDF (603KB)(506)       Save
In order to study meridian characteristics, a feature extraction method for ElectroCardioGraph (ECG) signals on the meridian, based on wavelet packet analysis and energy entropy, was proposed. A meridian measuring experiment was first built to complete the acquisition of meridian data. The meridian ECG signals were then decomposed by a three-layer wavelet packet decomposition, and energy entropy features were extracted from the results of signal reconstruction. After that, both K-means and Fuzzy C-Means (FCM) clustering techniques realized an effective partition of acupoints and non-acupoints. The clustering results indicate that the energy entropy values of ECG signals on acupoints are obviously higher than those on non-acupoints, which can serve as a scientific basis for discriminating acupoints from non-acupoints.
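The subband energy entropy feature can be sketched with a plain Haar wavelet packet decomposition (the paper does not name its wavelet, so Haar is an assumption here):

```python
import math

def haar_split(signal):
    """One Haar analysis step: approximation and detail halves."""
    a = [(signal[2 * i] + signal[2 * i + 1]) / math.sqrt(2)
         for i in range(len(signal) // 2)]
    d = [(signal[2 * i] - signal[2 * i + 1]) / math.sqrt(2)
         for i in range(len(signal) // 2)]
    return a, d

def wavelet_packet_leaves(signal, levels):
    """Full wavelet packet tree: split every node at every level."""
    nodes = [signal]
    for _ in range(levels):
        nxt = []
        for node in nodes:
            a, d = haar_split(node)
            nxt.extend([a, d])
        nodes = nxt
    return nodes

def energy_entropy(signal, levels=3):
    """Shannon entropy of the normalized subband energies
    (a common definition; the paper may normalize differently)."""
    leaves = wavelet_packet_leaves(signal, levels)
    energies = [sum(x * x for x in leaf) for leaf in leaves]
    total = sum(energies)
    probs = [e / total for e in energies if e > 0]
    return -sum(p * math.log(p) for p in probs)
```

A signal whose energy concentrates in one subband has entropy near zero, while energy spread across subbands raises the entropy, which is the discriminating property exploited for acupoints.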
Reference | Related Articles | Metrics
Algorithm of near-duplicate image detection based on Bag-of-words and Hash coding
WANG Yutian YUAN Jiangtao QIN Haiquan LIU Xin
Journal of Computer Applications    2013, 33 (03): 667-669.   DOI: 10.3724/SP.J.1087.2013.00667
Abstract911)      PDF (529KB)(523)       Save
To address the low efficiency and precision of traditional methods, a near-duplicate image detection algorithm based on Bag-of-Words and Hash coding was proposed in this paper. Firstly, each image was represented by a 500-dimensional Bag-of-Words feature vector built from Scale-Invariant Feature Transform (SIFT) descriptors; secondly, the feature dimension was reduced by Principal Component Analysis (PCA) and the features were encoded by Hash coding; finally, a dynamic distance metric was used to detect near-duplicate images. The experimental results show that the algorithm is feasible for detecting near-duplicate images and achieves a good balance between precision and recall: the precision reaches 90%-95%, and the overall recall reaches 70%-80%.
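The abstract does not specify the hash construction; a common choice compatible with the description is random-hyperplane hashing of the Bag-of-Words vectors followed by Hamming-distance comparison, sketched here as an assumption:

```python
import random

def make_hash_planes(dim, bits, seed=0):
    """Random Gaussian hyperplanes, one per output bit."""
    rng = random.Random(seed)
    return [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(bits)]

def hash_code(vec, planes):
    """Sign-of-projection binary code (random-hyperplane LSH)."""
    return [1 if sum(p * v for p, v in zip(plane, vec)) >= 0 else 0
            for plane in planes]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def is_near_duplicate(v1, v2, planes, max_dist):
    return hamming(hash_code(v1, planes), hash_code(v2, planes)) <= max_dist
```

Similar feature vectors fall on the same side of most hyperplanes and so differ in few bits, which makes the Hamming comparison a cheap stand-in for a full vector distance.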
Reference | Related Articles | Metrics
Fully anonymous multi-service subscription system without random oracles
LIU Xin LEI Wenqing
Journal of Computer Applications    2013, 33 (02): 417-429.   DOI: 10.3724/SP.J.1087.2013.00417
Abstract870)      PDF (1076KB)(391)       Save
Lately, Canard et al. (CANARD S, JAMBERT A. Untraceability and profiling are not mutually exclusive [C]// TrustBus 2010: Proceedings of the 7th International Conference on Trust, Privacy and Security in Digital Business, LNCS 6264. Berlin: Springer-Verlag, 2010: 117-128) introduced the notion of multi-service subscription and proposed several instantiations. Unfortunately, their systems only satisfied a weaker variant of anonymity called revocable anonymity, and they were not fit for "pay-per-use" services. To this end, a revised multi-service subscription system extending Canard et al.'s system was put forward. The new system achieved pay-per-use subscriptions by incorporating the anonymous payment system raised by Liu et al. (LIU J K, AU M H, SUSILO W, et al. Enhancing location privacy for electric vehicles (at the right time) [EB/OL]. [2012-08-01]. http://eprint.iacr.org/2012/342). To allow users to prove in zero-knowledge that their account balance is sufficient to pay for the required access, it also utilized the Peng-Bao range proof for small ranges. Furthermore, it was constructed on several 4-round perfect zero-knowledge proofs of knowledge, obtained by applying a technique of Cramer et al. to the underlying Sigma-protocols. Compared with typical systems in the literature, the new solution gains advantages in terms of security: it can be proved secure in the standard model, and it matches the strongest level of three crucial security notions, namely inseparability of spendable tokens, anonymity of users, and zero-knowledge of the underlying proof systems.
Related Articles | Metrics
LEACH-DRT: dynamic round-time algorithm based on low energy adaptive clustering hierarchy protocol
ZHONG Yiyang LIU Xingchang
Journal of Computer Applications    2013, 33 (01): 120-123.   DOI: 10.3724/SP.J.1087.2013.00120
Abstract933)      PDF (591KB)(541)       Save
Regarding the disadvantages of uneven clustering and fixed round time in the Low Energy Adaptive Clustering Hierarchy (LEACH) protocol, a Dynamic Round-Time algorithm based on LEACH (LEACH-DRT) was proposed to prolong network lifetime. The algorithm obtained the clusters' and member nodes' information from the base station, and then computed each cluster's round time according to its number of member nodes and remaining energy. The time information was sent to the clusters by the base station, and each cluster began to work according to the time information it received. Meanwhile, a new cluster head election mechanism avoided the data loss and wasted energy caused by insufficient cluster head energy. The analysis and simulation results show that the improved algorithm prolongs network lifetime by about four times and reduces the probability of data loss by 18% compared with the LEACH protocol, demonstrating that LEACH-DRT achieves a better balance between energy consumption and data loss rate.
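A dynamic round time computed from member count and remaining energy, as described, might look like the following sketch; the proportionality used is an illustrative assumption, since the paper's formula is not reproduced in the abstract:

```python
def round_time(remaining_energy, n_members, e_per_member_per_second, base_time=1.0):
    """Round time for one cluster: more remaining energy lengthens the round,
    more members (higher drain) shortens it. Illustrative formula only."""
    if n_members == 0:
        return base_time
    return remaining_energy / (n_members * e_per_member_per_second)

def assign_round_times(clusters, e_rate):
    """Base-station side: map cluster id -> round time, computed from the
    reported (remaining_energy, member_count) of each cluster."""
    return {cid: round_time(energy, members, e_rate)
            for cid, (energy, members) in clusters.items()}
```

With this shape, a cluster holding twice the energy is granted twice the round time, and a cluster with twice the members gets half, which matches the balancing behaviour the abstract describes.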
Reference | Related Articles | Metrics
Robust and efficient remote authentication with key agreement protocol
TANG Hong-bin LIU Xin-song
Journal of Computer Applications    2012, 32 (05): 1381-1384.  
Abstract1531)      PDF (2096KB)(689)       Save
Password-based authentication and key exchange protocols have been widely used in various network services because passwords are easy to memorize. Unfortunately, password-based authentication schemes also suffer from attacks because of the low entropy of passwords. In 2011, Islam et al. (ISLAM SK H, BISWAS G P. Improved remote login scheme based on ECC. IEEE International Conference on Recent Trends in Information Technology. Washington, DC: IEEE Computer Society, 2011: 1221-1226) proposed an improved remote login scheme based on Elliptic Curve Cryptography (ECC). However, the scheme was vulnerable to stolen-verifier and impersonation attacks and failed to provide mutual authentication. Therefore, the authors proposed a password-based Remote Authentication with Key Agreement (RAKA) protocol using ECC to tackle the problems in Islam et al.'s scheme. RAKA is based on the Elliptic Curve Discrete Logarithm Problem (ECDLP) and needs only six elliptic curve scalar multiplications and seven hash function operations during a protocol run, improving efficiency by about 15%. It is more secure and efficient than Islam et al.'s scheme.
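The dominant cost counted above is elliptic curve scalar multiplication; a toy double-and-add implementation over a small prime field (for illustration only, nothing like a secure curve) shows the operation in question:

```python
# Toy curve y^2 = x^3 + 2x + 3 (mod 97); parameters chosen for illustration only.
P, A = 97, 2

def ec_add(p1, p2):
    """Point addition on the toy curve; None is the point at infinity."""
    if p1 is None:
        return p2
    if p2 is None:
        return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return None                                  # inverse points
    if p1 == p2:
        s = (3 * x1 * x1 + A) * pow(2 * y1, -1, P) % P   # tangent slope
    else:
        s = (y2 - y1) * pow(x2 - x1, -1, P) % P          # chord slope
    x3 = (s * s - x1 - x2) % P
    return (x3, (s * (x1 - x3) - y1) % P)

def ec_mul(k, point):
    """Double-and-add scalar multiplication -- the operation whose count
    (six per protocol run) dominates RAKA's cost."""
    result, addend = None, point
    while k:
        if k & 1:
            result = ec_add(result, addend)
        addend = ec_add(addend, addend)
        k >>= 1
    return result
```

A real deployment would use a standardized curve with a group order of cryptographic size; the double-and-add structure is the same.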
Reference | Related Articles | Metrics
Edge-preserving filter with similarity noise detection for impulse noise reduction
LIU Xin GE Hong-wei XU Bing-chun
Journal of Computer Applications    2012, 32 (03): 739-741.   DOI: 10.3724/SP.J.1087.2012.00739
Abstract1077)      PDF (479KB)(531)       Save
In order to improve the filtering of noisy images, this paper put forward a new filtering algorithm consisting of three stages. Firstly, the similarities between pixels in the image were used to detect impulse noise. Then the filter window was divided into eight main directions to determine the directions of the edges. Finally, the impulse noise pixels were restored using an edge-preserving method. The simulation results indicate that this algorithm can accurately detect the noise points while also protecting the noise-free pixels and the boundaries in the noisy image when the noise density is small.
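The detect-then-restore structure can be sketched as follows; the similarity test and the neighbour-median restoration here are simplified stand-ins for the paper's eight-direction method:

```python
def detect_impulse(img, threshold=60):
    """Flag a pixel as impulse noise when no 8-neighbour is similar to it
    (similarity = absolute difference below the threshold)."""
    h, w = len(img), len(img[0])
    noise = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            neigh = [img[j][i] for j in range(max(0, y - 1), min(h, y + 2))
                               for i in range(max(0, x - 1), min(w, x + 2))
                               if (i, j) != (x, y)]
            if all(abs(img[y][x] - v) > threshold for v in neigh):
                noise[y][x] = True
    return noise

def restore(img, noise):
    """Replace each detected pixel with the median of its noise-free
    neighbours, leaving clean pixels (and hence edges) untouched."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(h):
        for x in range(w):
            if noise[y][x]:
                good = sorted(img[j][i]
                              for j in range(max(0, y - 1), min(h, y + 2))
                              for i in range(max(0, x - 1), min(w, x + 2))
                              if not noise[j][i])
                if good:
                    out[y][x] = good[len(good) // 2]
    return out
```

Because only flagged pixels are rewritten, noise-free pixels pass through unchanged, which is the edge-preserving property the abstract emphasizes.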
Reference | Related Articles | Metrics
Hidden identity-based signature scheme with distributed open authorities
LIU Xin
Journal of Computer Applications    2012, 32 (03): 699-704.   DOI: 10.3724/SP.J.1087.2012.00699
Abstract1494)      PDF (1095KB)(597)       Save
Hidden identity-based signature schemes from bilinear maps do not achieve exculpability and Chosen-Ciphertext Attack (CCA) anonymity, while schemes of this type built on RSA groups suffer from significant communication and computation overheads. Concerning this situation, an improved scheme with distributed open authorities was put forward, which satisfied exculpability by making use of the block messages signature. It achieved efficient distribution of the open authority by applying distributed key extraction and simultaneous proof of knowledge to the underlying threshold encryption scheme. Furthermore, to cope with the shortcomings of traditional serial registration, i.e., being vulnerable to the denial-of-service attack, its registration protocol was enhanced to be concurrent-secure by using the method of committed proof of knowledge. In the random oracle model, the proposed scheme could be proved to fulfill all the required properties. Performance comparison shows that the resultant signature is shorter and the algorithms (i.e., Sign and Verify) are more efficient. Moreover, the process of threshold decryption by trusted servers is proved to be concurrently-secure and it is also immune to adaptive adversaries.
Reference | Related Articles | Metrics
Cryptanalysis and improvement of TAKASIP protocol
TANG Hong-bin LIU Xin-song
Journal of Computer Applications    2012, 32 (02): 468-471.   DOI: 10.3724/SP.J.1087.2012.00468
Abstract1038)      PDF (680KB)(468)       Save
Session Initiation Protocol (SIP) provides authentication and session key agreement to ensure the security of the successive session. In 2010, Yoon et al. (YOON E-J, YOO K-Y. A three-factor authenticated key agreement scheme for SIP on elliptic curves. NSS '10: 4th International Conference on Network and System Security. Piscataway: IEEE, 2010: 334-339) proposed a three-factor authenticated key agreement scheme named TAKASIP for SIP. However, the scheme is vulnerable to insider attack, server-spoofing attack, off-line password attack and lost-token attack, and it does not provide mutual authentication. To overcome these flaws, a new three-factor authentication scheme named ETAKASIP, based on the Elliptic Curve Cryptosystem (ECC), was proposed. ETAKASIP, built on the elliptic curve discrete logarithm problem, provides higher security than TAKASIP. It needs only 7 elliptic curve scalar multiplications, 1 addition operation and at most 6 hash operations, and is therefore highly efficient.
Reference | Related Articles | Metrics
Mixed compression algorithm for error-diffusion halftone image based on look-up table
GENG Ye KONG Yue-ping LIU Xin
Journal of Computer Applications    2011, 31 (05): 1221-1223.   DOI: 10.3724/SP.J.1087.2011.01221
Abstract1169)      PDF (478KB)(844)       Save
A mixed compression algorithm for error-diffusion halftone images was proposed that overcomes the disadvantages of conventional lossy binary image coding techniques by combining them with an inverse halftoning method. A Look-Up Table (LUT) inverse halftoning method was used to convert the error-diffusion image back to a continuous-tone image, and an improved Discrete Cosine Transform (DCT) coding algorithm was then applied to achieve a higher compression rate. The experimental results indicate that the proposed algorithm is well suited to error-diffusion images and that the quality of the decoded images is good.
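LUT inverse halftoning in its most basic form maps each binary neighbourhood pattern to the average grey value observed for that pattern in training pairs. A minimal sketch, with the 3x3 pattern size as an assumption:

```python
def pattern(halftone, x, y):
    """3x3 binary neighbourhood encoded as a 9-bit key (borders padded with 0)."""
    h, w = len(halftone), len(halftone[0])
    key = 0
    for j in range(y - 1, y + 2):
        for i in range(x - 1, x + 2):
            bit = halftone[j][i] if 0 <= j < h and 0 <= i < w else 0
            key = (key << 1) | bit
    return key

def build_lut(halftone, contone):
    """Train the LUT: average the original grey value seen for each
    neighbourhood pattern over a (halftone, continuous-tone) training pair."""
    sums, counts = {}, {}
    h, w = len(halftone), len(halftone[0])
    for y in range(h):
        for x in range(w):
            k = pattern(halftone, x, y)
            sums[k] = sums.get(k, 0) + contone[y][x]
            counts[k] = counts.get(k, 0) + 1
    return {k: sums[k] / counts[k] for k in sums}

def inverse_halftone(halftone, lut, default=128):
    """Look up each pixel's pattern; fall back to a mid-grey for unseen patterns."""
    h, w = len(halftone), len(halftone[0])
    return [[lut.get(pattern(halftone, x, y), default)
             for x in range(w)] for y in range(h)]
```

The recovered continuous-tone image can then be fed to a standard transform coder such as DCT, which is the second stage of the mixed scheme.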
Related Articles | Metrics